Higher-order asymptotic expansions of the least-squares estimation bias in first-order dynamic regression models
Authors
Abstract
An approximation to order T^{-2} is obtained for the bias of the full vector of least-squares estimates in general stable but not necessarily stationary ARX(1) models with normal disturbances. This yields generalizations, allowing for various forms of initial conditions, of Kendall's and White's classic results for stationary AR(1) models. The accuracy of various alternative approximations is examined and compared by simulation for particular parametrizations of AR(1) and ARX(1) models. The results show that often the second-order approximation is considerably better than its first-order counterpart and hence opens perspectives for improved bias correction. However, we also find that order T^{-2} approximations are more vulnerable in the near unit root case than the much simpler order T^{-1} approximations.

1. Introduction and framework

The statistical literature concerned with the use of asymptotics for approximating statistical phenomena is vast. The overview by Pierce and Peters (1992) is one of a number of important contributions, and while this article and many others focus on the use of higher-order asymptotics to improve inference, there is also considerable interest in their application to analysing the bias of ML estimators; see, for example, Cox and Snell (1968) and Copas (1988), who discuss a general method for approximating the ML estimation bias to the order of T^{-1}, where T is the sample size, using an asymptotic expansion of the score function (see also Firth's contribution to the discussion in Pierce and Peters, 1992).
While Firth (1993), on noting that bias-corrected ML estimators are, quite generally, second-order efficient, shows that in regular parametric problems this first-order term is removed by a suitable modification of the score function, Kass (1992) commented that when the first-order asymptotic approximation to a density is poor but not horrible, the higher-order approximation usually mops up most of the error. One purpose of this paper is to examine this type of phenomenon in the context of bias approximation in autoregressive models by comparing the first-order and the second-order approximations in a number of cases.

Affiliations: Tinbergen Institute and Amsterdam School of Economics, University of Amsterdam, Roetersstraat 11, 1018 WB Amsterdam, The Netherlands ([email protected]); Cardiff Business School, Aberconway Building, Colum Drive, CF10 3EU, Cardiff, Wales, UK ([email protected]).

The use of asymptotic expansions in approximating the moments of estimators in stable autoregressive models has a relatively long history. The early work focused on the least-squares estimator of the serial correlation coefficient in the simplest autoregressive Gaussian process. See, for example, Bartlett (1946), who found a first-order variance approximation, and Hurwicz (1950), who obtained moment approximations for the case T = 3. Later White (1960) and Shenton and Johnson (1965) found higher-order approximations in terms of powers of T^{-1} for the first two moments in the AR(1) model. For the case of an AR(1) model with an intercept, Kendall (1954) and Marriott and Pope (1954) gave an approximation to the bias of the least-squares estimator of the lagged-dependent-variable coefficient to the order of T^{-1}.
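Kendall's first-order result for the AR(1) model with an intercept states that the least-squares bias satisfies E(rho_hat - rho) ≈ -(1 + 3*rho)/T. The following Monte Carlo sketch compares the simulated bias with this approximation; it is an illustration only, not the paper's own simulation design, and the parameter values are arbitrary:

```python
import numpy as np

def ls_rho(y):
    # OLS regression of y_t on (1, y_{t-1}); return the lagged-coefficient estimate
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    return np.linalg.lstsq(X, y[1:], rcond=None)[0][1]

def mc_bias(rho, T, reps=5000, seed=0):
    # average least-squares bias over `reps` simulated Gaussian AR(1) samples
    rng = np.random.default_rng(seed)
    est = np.empty(reps)
    for r in range(reps):
        e = rng.standard_normal(T + 1)
        y = np.empty(T + 1)
        y[0] = e[0] / np.sqrt(1 - rho ** 2)  # stationary initial condition
        for t in range(1, T + 1):
            y[t] = rho * y[t - 1] + e[t]
        est[r] = ls_rho(y)
    return est.mean() - rho

rho, T = 0.6, 25
print("simulated bias:    ", mc_bias(rho, T))
print("Kendall -(1+3r)/T: ", -(1 + 3 * rho) / T)
```

For moderate T the two numbers are typically close, which is the "surprisingly good" accuracy of the first-order approximation discussed below.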
Higher-order approximations to the bias in the vector of the least-squares coefficient estimator in normal autoregressive models with or without an intercept or with any further exogenous explanatory variables were obtained by us in a very early version of this paper¹, but remained unpublished because until recently we couldn't prove the general validity of these approximations. In Kiviet and Phillips (2009), however, which focuses on improved variance estimation in autoregressive models, we provide a general proof in which the order in a power of T is established of the remainder term in any higher-order expansion yielding an approximation to first- or higher-order moments of a linear least-squares estimator. The proof only requires assumptions on the existence of particular data moments and the differentiability of the non-linear function of the data moments which identifies and establishes the least-squares estimator. These assumptions are rather mild and will hold in the dynamic regression model to be examined here.

Research on the accuracy of the approximations published thus far has shown that the higher-order results of White are very accurate and also that Kendall's first-order approximation is often surprisingly good. For evidence on these points, see Sawa (1978) and Nankervis and Savin (1988). Their exact results both confirm the severity of the bias problem and demonstrate the quality of some of the approximations. In the context of the AR(1) model with intercept, Monte Carlo results by Orcutt and Winokur (1969) provide both additional evidence on these matters and an illustration of how bias correction based on Kendall's approximation can be effective in not only reducing bias but in lowering the mean-squared error (MSE) as well.
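One simple plug-in correction based on Kendall's T^{-1} formula replaces the estimate rho_hat by rho_hat + (1 + 3*rho_hat)/T. The sketch below, with illustrative parameter values and not the exact corrections studied in the papers cited, shows how such a correction shrinks the simulated bias:

```python
import numpy as np

def ar1_ls(y):
    # least-squares coefficient on y_{t-1} in a regression with intercept
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    return np.linalg.lstsq(X, y[1:], rcond=None)[0][1]

def biases(rho, T, reps, seed=1):
    # bias of the raw estimator and of the Kendall plug-in corrected one
    rng = np.random.default_rng(seed)
    raw = np.empty(reps)
    for i in range(reps):
        e = rng.standard_normal(T + 1)
        y = np.empty(T + 1)
        y[0] = e[0] / np.sqrt(1 - rho ** 2)  # stationary start
        for t in range(1, T + 1):
            y[t] = rho * y[t - 1] + e[t]
        raw[i] = ar1_ls(y)
    corrected = raw + (1 + 3 * raw) / T      # plug-in bias correction
    return raw.mean() - rho, corrected.mean() - rho

b_raw, b_corr = biases(0.5, 30, 5000)
print("raw bias:      ", b_raw)
print("corrected bias:", b_corr)
```

The residual bias of the corrected estimator is of smaller order in T, which is precisely the margin that the second-order approximations studied in this paper aim to exploit further.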
This latter point has been noted too by Rudebusch (1993), who uses Kendall's approximation and an approximation for higher-order AR models in bias-corrected estimators when investigating whether real GNP is trend-stationary or difference-stationary. The first-order estimation bias in higher-order autoregressive processes has been examined by Shaman and Stine (1988), and in multivariate autoregressive processes by Tjøstheim and Paulsen (1983) and by Nicholls and Pope (1988).

Naturally, the accuracy of asymptotic approximations is limited and depends on the order of the approximation and the actual size of the sample, but usually also on the model parameters and design, and on initial conditions. If the accuracy of a first-order approximation falls short for a specific case, then it seems advisable to examine a higher-order approximation, although considerable analytic problems may be incurred. Evans and Savin (1981) demonstrate the effectiveness of particular higher-order results in the AR(1) model without intercept.

For multi-parameter static simultaneous equations models the seminal paper of Nagar (1959) provided approximations to the moments of consistent k-class estimators. In particular, these include a bias approximation to the order of T^{-1}. The results were later confirmed by Kadane (1971) using the approach of small-disturbance asymptotics. Mikhail (1972) suggested that the first-order approximation to the bias may be inaccurate in some cases and he ex-

¹ This paper (same title) was presented at the Econometric Society World Conference 1995 held in Tokyo.
Similar articles
Improved variance estimation of maximum likelihood estimators in stable first-order dynamic regression models
In dynamic regression models conditional maximum likelihood (least-squares) coefficient and variance estimators are biased. From expansions of the coefficient variance and its estimator we obtain an approximation to the bias in variance estimation and a bias-corrected variance estimator, for both the standard and a bias-corrected coefficient estimator. These enable a comparison of their mean...
Second Order Moment Asymptotic Expansions for a Randomly Stopped and Standardized Sum
This paper establishes the first four moment expansions to the order $o(a^{-1})$ of $S'_{t_a}/\sqrt{t_a}$, where $S'_n=\sum_{i=1}^{n}Y_i$ is a simple random walk with $E(Y_i)=0$, and $t_a$ is a stopping time given by $t_a=\inf\{n\geq 1: n+S_n+\zeta_n>a\}$, where $S_n=\sum_{i=1}^{n}X_i$ is another simple random walk with $E(X_i)=0$, and $\{\zeta_n, n\geq 1\}$ is a sequence of ran...
Comparison of Bootstrap and Jackknife Variance Estimators in Linear Regression: Second Order Results
In an extension of the work of Liu and Singh (1992), we consider resampling estimates for the variance of the least squares estimator in linear regression models. Second order terms in asymptotic expansions of these estimates are derived. By comparing the second order terms, certain generalised bootstrap schemes are seen to be theoretically better than other resampling techniques under very gen...
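A minimal sketch of one of the resampling schemes such comparisons consider, the paired (case-resampling) bootstrap variance estimator for the least-squares slope; the linear model, sample size, and replication count here are purely illustrative:

```python
import numpy as np

def bootstrap_var_slope(x, y, B=2000, seed=0):
    # case-resampling ("paired") bootstrap variance of the OLS slope
    rng = np.random.default_rng(seed)
    n = len(x)
    slopes = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)          # resample (x_i, y_i) pairs
        X = np.column_stack([np.ones(n), x[idx]])
        slopes[b] = np.linalg.lstsq(X, y[idx], rcond=None)[0][1]
    return slopes.var(ddof=1)

rng = np.random.default_rng(42)
x = rng.standard_normal(100)
y = 1.0 + 2.0 * x + rng.standard_normal(100)
v = bootstrap_var_slope(x, y)
print("bootstrap variance of slope:", v)
```

With unit error variance the classical variance of the slope is roughly 1/(n * var(x)), so the bootstrap figure should land in that neighbourhood.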
Derivative estimation based on difference sequence via locally weighted least squares regression
A new method is proposed for estimating derivatives of a nonparametric regression function. By applying the Taylor expansion technique to a derived symmetric difference sequence, we obtain a sequence of approximate linear regression representations in which the derivative is just the intercept term. Using locally weighted least squares, we estimate the derivative in the linear regression model. The ...
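The general idea of reading a derivative off a locally weighted least-squares fit can be sketched with a generic local quadratic regression, in which the coefficient on (x - x0) estimates f'(x0); note this is not the authors' difference-sequence construction, and the Gaussian kernel and bandwidth below are illustrative choices:

```python
import numpy as np

def local_deriv(x, y, x0, h):
    # local quadratic weighted least squares around x0; the coefficient
    # on (x - x0) estimates the first derivative f'(x0)
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
    sw = np.sqrt(w)                          # square-root weights for WLS via lstsq
    X = np.column_stack([np.ones_like(x), x - x0, (x - x0) ** 2])
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[1]

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1, 1, 400))
y = np.sin(2 * x) + 0.05 * rng.standard_normal(400)
d = local_deriv(x, y, 0.0, 0.2)
print("estimated f'(0):", d)   # true derivative is 2*cos(0) = 2
```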
The Effect of Microaggregation Procedures on the Estimation of Linear Models: A Simulation Study
Microaggregation is a set of procedures that distort empirical data in order to guarantee the factual anonymity of the data. At the same time the information content of data sets should not be reduced too much and should still be useful for scientific research. This paper investigates the effect of microaggregation on the estimation of a linear regression by ordinary least squares. It studies, ...
Journal: Computational Statistics & Data Analysis
Volume: 56, Issue: —
Pages: —
Publication date: 2012